Ashish Sabharwal, Cornell University

Author

  • Ashish Sabharwal
Abstract

Abstraction in AI Planning: Abstraction is a commonly employed technique, especially in large-scale model checking and planning, that attempts to improve efficiency by abstracting away non-critical details of the problem at hand, thus reducing the size of the raw search space. Can abstraction methods really achieve much benefit in AI planning systems? While this empirically does seem to be the case for BDD-based symbolic planners (which essentially perform a fast blind search) and for heuristic planners such as IPP, this work [ICAPS-06, JAIR-09] provided a rather surprising, provably negative answer for the best-case behavior of Resolution-based optimal planners such as the award-winning SATPLAN planner: none of the commonly used abstraction methods can improve the best-case behavior of SATPLAN under several encodings typically used in practice. At a high level, this showed that the "informedness" of the search method must compete with the informedness of the abstraction heuristic, providing new insights into the design of abstraction techniques. This work was nominated for the Best Paper Award at the ICAPS-06 conference.

Hardness Profiles and Problem Structure: Constraint satisfaction problems often exhibit an intriguing pattern: an abrupt phase transition from being feasible to being infeasible as a key problem parameter is varied. This work [CPAIOR-07/08], with direct application to the "wildlife corridor" design for grizzly bears in the U.S. Northern Rockies discussed earlier, empirically revealed such phenomena, and a corresponding "easy-hard-easy" hardness pattern, for the first time for problems that combine both constraint satisfaction and optimization aspects. In ongoing work, we have discovered, for the first time, heavy-tailed runtime distribution patterns in local search solvers.
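The phase-transition phenomenon described above can be reproduced at a small scale. The sketch below is my own toy illustration, not code from the cited papers: it generates random 3-SAT instances at several clause-to-variable ratios and measures the fraction that are satisfiable. The satisfiable fraction drops abruptly near the well-known threshold ratio of roughly 4.27, and solver runtimes peak in that region, giving the "easy-hard-easy" pattern.

```python
import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause picks 3 distinct variables,
    each negated with probability 1/2."""
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def satisfiable(clauses, n_vars):
    """Brute-force satisfiability check (exponential; only for tiny n)."""
    for m in range(2 ** n_vars):
        # bit i of m is the truth value of variable i+1
        if all(any((lit > 0) == bool((m >> (abs(lit) - 1)) & 1) for lit in cl)
               for cl in clauses):
            return True
    return False

rng = random.Random(0)
n, trials = 10, 15
for ratio in (2.0, 4.3, 6.0):  # below, near, and above the ~4.27 threshold
    frac = sum(satisfiable(random_3sat(n, int(ratio * n), rng), n)
               for _ in range(trials)) / trials
    print(f"clause/variable ratio {ratio}: fraction satisfiable = {frac:.2f}")
```

The brute-force check limits this demo to a handful of variables; the empirical studies in the cited work rely on complete and local-search solvers to probe far larger instances.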
In a related direction [CP-07, ISAIM-08], our work has brought to light the fundamental strength of the notion of propagation-based "backdoor sets", used to characterize real-world structure in combinatorial problems and to explain the astonishing scalability of SAT solvers on structured industrial benchmarks. Specifically, we showed that the dynamic, propagation-based backdoors used by these solvers can be exponentially smaller, and thus more powerful, than those based on static, syntactic classes such as 2-CNF or Horn SAT used in certain formal analyses, and also provably harder to compute in the worst case assuming NP ≠ coNP. Building further upon this work [SAT-09], we extended the notion of backdoors to capture learning during search, a key ingredient of many successful state-of-the-art constraint solvers, and showed that learning again can result in exponentially more compact backdoors. Finally, we extended the notion of backdoors, defined originally for constraint satisfaction problems, to general optimization problems and demonstrated that interesting MIP (mixed integer programming) optimization problems do have surprisingly short backdoor sets [CPAIOR-09]. In fact, contrary to the general intuition about computational hardness in optimization, we found that for some problems, backdoors for proving optimality were significantly smaller than those for finding an optimal solution.

D. FOUNDATIONAL ISSUES: ALGORITHM DESIGN AND PROOF COMPLEXITY

The final theme of my research involves addressing foundational issues underlying automated reasoning systems. Specifically, I design efficient algorithms for constraint solvers and characterize the strength of various "proof systems". This has resulted in the first polynomial-time algorithms in some cases, and NP-completeness or hardness-of-approximation results in others.
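To make the backdoor-set notion discussed above concrete: a weak backdoor with respect to unit propagation is a set of variables such that some assignment to them lets unit propagation alone satisfy the rest of the formula. The sketch below (my own illustration with invented helper names, not the algorithms studied in the cited papers) finds a smallest such set by brute force:

```python
from itertools import combinations, product

def unit_propagate(clauses, assignment):
    """Repeatedly assign forced (unit) literals.  Returns (status, assignment)
    with status 'sat', 'conflict', or 'stuck' (no units, clauses remain)."""
    assignment = dict(assignment)  # var -> bool
    while True:
        simplified = []
        for cl in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in cl):
                continue  # clause already satisfied
            rest = [l for l in cl if abs(l) not in assignment]
            if not rest:
                return 'conflict', assignment  # clause falsified
            simplified.append(rest)
        if not simplified:
            return 'sat', assignment
        units = [cl[0] for cl in simplified if len(cl) == 1]
        if not units:
            return 'stuck', assignment
        for l in units:
            if assignment.get(abs(l), l > 0) != (l > 0):
                return 'conflict', assignment  # contradictory units
            assignment[abs(l)] = l > 0

def smallest_weak_backdoor(clauses, variables):
    """Smallest set B such that SOME assignment to B lets unit propagation
    alone satisfy the formula (brute force; exponential, demo only)."""
    for k in range(len(variables) + 1):
        for B in combinations(variables, k):
            for bits in product([False, True], repeat=k):
                status, _ = unit_propagate(clauses, dict(zip(B, bits)))
                if status == 'sat':
                    return set(B)
    return None

demo = [(-1, 2), (-2, 3), (1, -3), (1, 2, 3)]
print(smallest_weak_backdoor(demo, [1, 2, 3]))  # setting x1=True propagates the rest
```

Here branching on the single variable x1 suffices even though the formula mentions three variables, which is the sense in which a small backdoor captures the "hidden easiness" that propagation-based solvers exploit.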
Two examples of my work in this area are:

Filtering Algorithms for Special Constraints: This work [CP-06, Constraints J.-09] was recognized with the Best Paper Award at CP-06, the 12th International Conference on Principles and Practice of Constraint Programming. It introduced the first polynomial-time filtering algorithm for a combinatorial constraint (the "sequence" constraint) that appears frequently in scheduling and design-automation problems, such as in a car manufacturing pipeline. This resolved a question that had been open for 10 years in the Constraint Programming (CP) community. In related work [ModRef-07, CPAIOR-08, ongoing], our algorithms have revealed the exponential memory and runtime savings that higher-level, set-based representations of constraints can achieve. This work provided some of the first polynomial-time filtering algorithms for constraints in this higher-level representation.

Resolution Complexity and Hardness of Approximation: This work in the field of proof complexity theory [Complexity-01, Computational Complexity J.-07] studied the Resolution proof system in semi-structured domains. It showed that almost all instances of some interesting co-NP-complete graph problems require exponential-size Resolution proofs of infeasibility, even to approximate within significant factors, thus providing a large family of structured formulas that are exponentially hard for the Resolution proof system. The methodology involved a blend of combinatorial and probabilistic analysis and expansion properties of random graphs. The work also showed that a natural class of approximate optimization algorithms for these problems must fail to provide good approximations on almost all problem instances. In related work [FOCS-02, SIAM J. Computing-04], we proved that even stronger proof systems, such as the bounded-depth Frege systems described in many logic texts, require exponential-size proofs even for very weak pigeonhole formulas, strengthening previously known results in this area.

Ashish Sabharwal, January 2010
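The "sequence" constraint discussed above bounds, for every window of q consecutive variables, how many take their value from a given set (e.g., at most one car requiring a sunroof in any two consecutive slots of an assembly line). The sketch below is my own generate-and-test illustration, with invented names `sequence_ok` and `naive_filter`; it computes the same pruning that the CP-06 algorithm achieves, but by exponential enumeration rather than in polynomial time:

```python
from itertools import product

def sequence_ok(values, l, u, q, S):
    """Sequence(l, u, q, S): every window of q consecutive values
    must contain between l and u members of the set S."""
    return all(l <= sum(v in S for v in values[i:i + q]) <= u
               for i in range(len(values) - q + 1))

def naive_filter(domains, l, u, q, S):
    """Keep a value only if some full assignment using it satisfies the
    constraint (generate-and-test; exponential, for illustration only)."""
    support = [set() for _ in domains]
    for assignment in product(*domains):
        if sequence_ok(assignment, l, u, q, S):
            for i, v in enumerate(assignment):
                support[i].add(v)
    return [sorted(set(d) & s) for d, s in zip(domains, support)]

# Car-line toy example: 1 = "needs sunroof"; at most one sunroof car
# in any 2 consecutive slots, with slots 0 and 4 already fixed to 1.
print(naive_filter([[1], [0, 1], [0, 1], [0, 1], [1]], 0, 1, 2, {1}))
```

Filtering here correctly forces the slots adjacent to the fixed sunroof cars to 0; a polynomial-time filtering algorithm reaches the same conclusion without enumerating assignments, which is what makes it usable inside a CP solver's propagation loop.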
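The pigeonhole formulas mentioned above are easy to state yet provably hard to refute: encoding "n+1 pigeons fit into n holes" as CNF yields an unsatisfiable formula for which Resolution refutations require exponential size (Haken's classical result). A minimal generator, as my own sketch of the standard encoding:

```python
from itertools import combinations

def pigeonhole_cnf(n):
    """CNF encoding of 'n+1 pigeons fit into n holes' (PHP).
    Variable i*n + j + 1 means 'pigeon i sits in hole j'."""
    var = lambda i, j: i * n + j + 1
    cnf = [[var(i, j) for j in range(n)] for i in range(n + 1)]  # pigeon i is in some hole
    for j in range(n):  # hole j holds at most one pigeon
        for i1, i2 in combinations(range(n + 1), 2):
            cnf.append([-var(i1, j), -var(i2, j)])
    return cnf

cnf = pigeonhole_cnf(3)
print(f"{len(cnf)} clauses over {3 * 4} variables")  # 22 clauses, 12 variables
```

The formula has only O(n^3) clauses, yet any Resolution refutation is exponential in n; the [FOCS-02, SIAM J. Computing-04] results concern the harder "weak" variants with many more pigeons and the stronger bounded-depth Frege systems.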



Publication date: 2010